This paper presents a prototype of a low-cost unmanned surface vehicle (USV), powered by wave and solar energy, that can be used to minimize the cost of ocean data collection. The current prototype is a compact USV, 1.2 m in length, that can be deployed and recovered by two people. The design includes an electric winch used to retract and lower the underwater unit. Several elements of the design take advantage of additive manufacturing and inexpensive materials. The vehicle can be controlled over radio-frequency (RF) and satellite communications through a custom-developed web application. Guided by previous research work and recommendations on advanced materials, the surface and underwater units were optimized with respect to drag, lift, weight, and cost. By measuring several parameters, such as dissolved oxygen, salinity, temperature, and pH, the USV can be used for water quality monitoring.
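As an illustration of the kind of telemetry such a vehicle might report to its web application, the sketch below packages one water-quality sample (dissolved oxygen, salinity, temperature, pH) as a compact JSON packet; the field names, units, and packet layout are assumptions, since the abstract does not specify the communication protocol.

```python
# Illustrative sketch only: the abstract does not specify the telemetry format,
# so the field names, units, and JSON layout below are assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class WaterQualityReading:
    """One sample from the USV's underwater sensing unit (assumed units)."""
    timestamp_utc: float          # UNIX time of the measurement
    dissolved_oxygen_mg_l: float
    salinity_psu: float
    temperature_c: float
    ph: float

def to_telemetry_packet(reading: WaterQualityReading) -> str:
    """Serialize a reading as compact JSON for an RF/satellite uplink."""
    return json.dumps(asdict(reading), separators=(",", ":"))

if __name__ == "__main__":
    sample = WaterQualityReading(
        timestamp_utc=time.time(),
        dissolved_oxygen_mg_l=7.8,
        salinity_psu=35.1,
        temperature_c=18.4,
        ph=8.1,
    )
    print(to_telemetry_packet(sample))
```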
Community Question Answering (CQA) sites have spread and multiplied significantly in recent years. Sites like Reddit, Quora, and Stack Exchange are becoming popular amongst people interested in finding answers to diverse questions. One practical way of finding such answers is automatically predicting the best candidate given existing answers and comments. Many studies have been conducted on answer prediction in CQA, but with limited focus on using the questioners' background information. We address this limitation with a novel method for predicting the best answers using the questioner's background information and other features, such as the textual content or the relationships with other participants. Our answer classification model was trained on the Stack Exchange dataset and validated using the Area Under the Curve (AUC) metric. The experimental results show that the proposed method complements previous methods by pointing out the importance of the relationships between users, particularly through their level of involvement in different communities on Stack Exchange. Furthermore, we point out that there is little overlap between user-relation information and the information represented by the shallow text features and the meta-features, such as time differences.
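A minimal sketch of the evaluation setup described above, using scikit-learn on synthetic data: the specific features (stand-ins for textual, meta, and questioner-background features), the gradient-boosting classifier, and the data split are assumptions, not the authors' exact pipeline.

```python
# Sketch of a best-answer classifier evaluated with AUC; features and model are assumed.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in feature matrix: e.g. answer length, time gap to the question,
# questioner's reputation, and questioner/answerer community overlap.
n_answers = 2000
X = rng.normal(size=(n_answers, 4))
# Stand-in label: 1 if the answer was accepted as best, else 0.
y = (X[:, 2] + X[:, 3] + rng.normal(scale=0.5, size=n_answers) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print(f"AUC = {roc_auc_score(y_test, scores):.3f}")
```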
Although unsupervised domain adaptation methods have achieved remarkable performance in semantic scene segmentation for the visual perception of self-driving cars, these approaches remain impractical in real-world use cases. In practice, the segmentation models may encounter new data that have not been seen yet. Also, the data previously used to train the segmentation models may be inaccessible due to privacy concerns. To address these problems, in this work we propose a Continual Unsupervised Domain Adaptation (CONDA) approach that allows the model to continuously learn and adapt as new data arrive. Moreover, our proposed approach is designed without requiring access to previous training data. To avoid the catastrophic forgetting problem and maintain the performance of the segmentation models, we present a novel Bijective Maximum Likelihood loss that constrains the shift of the predicted segmentation distributions. The experimental results on the benchmark of continual unsupervised domain adaptation show the advanced performance of the proposed CONDA method.
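The abstract does not give the form of the Bijective Maximum Likelihood loss, but a loss of that kind typically follows the normalizing-flow recipe: map the predicted distribution through an invertible network and maximize the likelihood of the result under a simple base density. The sketch below illustrates that recipe with a trivially invertible affine bijector; the architecture, the standard-normal base distribution, and the 19-class Cityscapes-style input are assumptions, not the paper's exact loss.

```python
# Sketch of a bijective maximum-likelihood term in the normalizing-flow sense;
# illustration of the idea, not the CONDA paper's implementation.
import math
import torch
import torch.nn as nn

class AffineBijector(nn.Module):
    """A trivially invertible element-wise affine map y = x * exp(s) + t."""
    def __init__(self, dim: int):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor):
        y = x * self.log_scale.exp() + self.shift
        log_det = self.log_scale.sum() * torch.ones(x.shape[0])  # per-sample log|det J|
        return y, log_det

def bijective_nll(bijector: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of x under a standard-normal base distribution."""
    z, log_det = bijector(x)
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    return -(log_pz + log_det).mean()

# Example: per-pixel class-probability vectors flattened to (batch, num_classes).
probs = torch.softmax(torch.randn(8, 19), dim=1)   # 19 classes, Cityscapes-style
flow = AffineBijector(dim=19)
loss = bijective_nll(flow, probs)
loss.backward()
print(float(loss))
```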
Current authentication and trusted systems rely on classical and biometric methods to identify or authorize users. These methods include audio speech recognition, eye, and finger signatures. Recent tools leverage deep learning and transformers to achieve better results. In this paper, we develop deep-learning-based models for Arabic speaker recognition using the Wav2Vec2.0 and HuBERT audio representation-learning tools. The end-to-end Wav2Vec2.0 paradigm learns contextualized speech representations by randomly masking a set of feature vectors and then applying a transformer neural network. We used an MLP classifier that distinguishes between the invariant labeled classes. We present several experimental results that confirm the high accuracy of the proposed models. The experiments show that an arbitrary wave signal from certain speakers can be identified with 98% and 97.1% accuracy in the case of Wav2Vec2.0 and HuBERT, respectively.
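A minimal sketch of the described pipeline, assuming a pretrained Wav2Vec2.0 encoder from Hugging Face transformers whose mean-pooled representations feed an MLP speaker classifier; the checkpoint name, pooling strategy, layer sizes, and number of speakers are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch: Wav2Vec2.0 contextualized representations, mean-pooled, classified by an MLP.
import torch
import torch.nn as nn
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
encoder.eval()

num_speakers = 10  # hypothetical number of speakers to identify
mlp = nn.Sequential(
    nn.Linear(encoder.config.hidden_size, 256),
    nn.ReLU(),
    nn.Linear(256, num_speakers),
)

# A 2-second dummy waveform at 16 kHz stands in for a real recording.
waveform = torch.randn(16000 * 2)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state   # (1, frames, hidden_size)
utterance_embedding = hidden.mean(dim=1)           # simple mean pooling over time
logits = mlp(utterance_embedding)                  # speaker scores
print(logits.argmax(dim=-1))
```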
The use of machine learning (ML) in detecting network attacks has proven effective when designed and evaluated within a single organization. However, it is very challenging to design an ML-based detection system that utilizes heterogeneous network data samples originating from several sources. This is mainly due to privacy concerns and the lack of a universal format for the datasets. In this paper, we propose a collaborative federated learning scheme to address these issues. The proposed framework allows multiple organizations to join forces in the design, training, and evaluation of a robust ML-based network intrusion detection system. The threat intelligence scheme utilizes two critical aspects for its application: first, the availability of network traffic data in a common format, allowing the extraction of meaningful patterns across data sources; second, the adoption of a federated learning mechanism to avoid the necessity of sharing sensitive users' information between organizations. As a result, each organization benefits from the threat intelligence of the other organizations while keeping its data private internally. The model is trained locally, and only the updated weights are shared with the remaining participants in the federated averaging process. The framework is designed and evaluated in this paper using two key datasets in NetFlow format, known as NF-UNSW-NB15-v2 and NF-BoT-IoT-v2. Two other common scenarios are considered in the evaluation process: a centralized training method, where local data samples are shared with other organizations, and a localized training method, where no threat intelligence is shared. The results demonstrate the efficiency and effectiveness of the proposed framework by designing a universal ML model that effectively classifies benign and intrusive traffic originating from multiple organizations without the need for local data exchange.
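The federated-averaging step described above can be sketched as follows: each organization runs local training on its private flow records, and only model weights are exchanged and averaged. The logistic-regression model, feature count, and synthetic data below are placeholders, not the paper's architecture.

```python
# Minimal FedAvg sketch: local updates per organization, weighted averaging of weights.
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One organization's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted intrusion probability
        w -= lr * X.T @ (p - y) / len(y)        # gradient step on log-loss
    return w

def federated_average(updates: list, sizes: list) -> np.ndarray:
    """FedAvg: weight each organization's update by its local sample count."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
n_features = 20                                  # e.g. NetFlow-derived features
global_w = np.zeros(n_features)

# Three organizations with private local datasets (synthetic placeholders).
orgs = [(rng.normal(size=(500, n_features)), rng.integers(0, 2, 500)) for _ in range(3)]

for round_ in range(10):
    updates = [local_update(global_w, X, y) for X, y in orgs]
    global_w = federated_average(updates, [len(y) for _, y in orgs])
print(global_w[:5])
```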
A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems (NIDSs). Consequently, network interruptions and loss of sensitive data have occurred, which led to an active research area for improving NIDS technologies. In an analysis of related works, it was observed that most researchers aim to obtain better classification results by using a set of untried combinations of Feature Reduction (FR) and Machine Learning (ML) techniques on NIDS datasets. However, these datasets differ in feature sets, attack types, and network design. Therefore, this paper aims to discover whether these techniques can be generalised across various datasets. Six ML models are utilised: a Deep Feed Forward (DFF), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Decision Tree (DT), Logistic Regression (LR), and Naive Bayes (NB). The accuracy of three Feature Extraction (FE) algorithms, Principal Component Analysis (PCA), Auto-encoder (AE), and Linear Discriminant Analysis (LDA), is evaluated using three benchmark datasets: UNSW-NB15, ToN-IoT and CSE-CIC-IDS2018. Although PCA and AE algorithms have been widely used, the determination of their optimal number of extracted dimensions has been overlooked. The results indicate that no clear FE method or ML model can achieve the best scores for all datasets. The optimal number of extracted dimensions has been identified for each dataset, and LDA degrades the performance of the ML models on two datasets. The variance is used to analyse the extracted dimensions of LDA and PCA. Finally, this paper concludes that the choice of datasets significantly alters the performance of the applied techniques. We believe that a universal (benchmark) feature set is needed to facilitate further advancement and progress of research in this field.
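As a sketch of this kind of FE-plus-ML comparison, the snippet below evaluates PCA with several numbers of extracted dimensions against LDA using scikit-learn on synthetic data; the real study uses the UNSW-NB15, ToN-IoT and CSE-CIC-IDS2018 datasets and a broader set of models, so the data, classifier, and dimension grid here are assumptions.

```python
# Sketch: compare PCA (varying extracted dimensions) and LDA as feature extractors.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a flow-based NIDS dataset (binary: benign vs. attack).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA with several choices for the number of extracted dimensions.
for n_dims in (2, 5, 10):
    pca_clf = make_pipeline(StandardScaler(), PCA(n_components=n_dims),
                            LogisticRegression(max_iter=1000))
    pca_clf.fit(X_train, y_train)
    print(f"PCA({n_dims}) accuracy: {pca_clf.score(X_test, y_test):.3f}")

# LDA can extract at most (n_classes - 1) dimensions, i.e. 1 for binary traffic labels.
lda_clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis(n_components=1),
                        LogisticRegression(max_iter=1000))
lda_clf.fit(X_train, y_train)
print(f"LDA(1) accuracy: {lda_clf.score(X_test, y_test):.3f}")
```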